
Reviews: A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

Neural Information Processing Systems

The paper gives a Bayesian interpretation of a potential dropout-like technique for training recurrent neural nets. This research direction is valuable for its potential influence on the theoretical understanding of both RNNs and dropout. However, under the proposed framework, the reason this technique is useful remains somewhat unclear. In particular, the paper proposes a mixture prior on the rows of weights (line 129) without explaining the benefit of doing so, beyond the resulting interpretation as dropout. Moreover, much of the interpretation is a straightforward extension of the previously proposed interpretation of dropout in the feedforward case (Gal and Ghahramani, ICML 2016) and can hardly be considered a significant novelty; the empirical novelty is also diminished by the earlier paper of Moon et al.
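For context, the mixture the review refers to is presumably of the form used in Gal and Ghahramani's feedforward analysis, where each row of weights $\mathbf{w}_k$ is modelled with a two-component Gaussian mixture, one component pinned at zero (a sketch of the ICML 2016 construction, not necessarily this submission's exact notation):

$$
q(\mathbf{w}_k) \;=\; p\,\mathcal{N}\!\left(\mathbf{w}_k;\, \mathbf{0},\, \sigma^2 I\right)
\;+\; (1-p)\,\mathcal{N}\!\left(\mathbf{w}_k;\, \mathbf{m}_k,\, \sigma^2 I\right)
$$

As $\sigma^2 \to 0$, sampling from this mixture amounts to keeping the learned row $\mathbf{m}_k$ with probability $1-p$ and zeroing it with probability $p$, which is where the dropout interpretation comes from.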


A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

Gal, Yarin, Ghahramani, Zoubin

Neural Information Processing Systems

Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity).
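The key practical idea behind the variational dropout scheme is to sample one dropout mask per sequence and reuse it at every timestep, on both the inputs and the recurrent state, rather than resampling a fresh mask at each step. A minimal sketch of that scheme on a plain tanh RNN (an illustrative simplification; the paper's experiments use LSTM and GRU cells, and the function and parameter names here are hypothetical):

```python
import numpy as np

def variational_dropout_rnn(x_seq, W_x, W_h, p=0.5, rng=None):
    """Run a tanh RNN over x_seq with tied (per-sequence) dropout masks.

    x_seq : array of shape (T, n_in), one input vector per timestep
    W_x   : input-to-hidden weights, shape (n_in, n_hid)
    W_h   : hidden-to-hidden weights, shape (n_hid, n_hid)
    p     : dropout probability
    """
    rng = rng or np.random.default_rng(0)
    n_in, n_hid = W_x.shape
    # Sample the dropout masks ONCE per sequence -- the crux of the
    # variational scheme: the same mask is reused at every timestep,
    # with inverted-dropout scaling so expectations are preserved.
    m_x = rng.binomial(1, 1 - p, size=n_in) / (1 - p)
    m_h = rng.binomial(1, 1 - p, size=n_hid) / (1 - p)
    h = np.zeros(n_hid)
    outputs = []
    for x_t in x_seq:
        # The same m_x and m_h mask the input and the recurrent state
        # at every step (naive dropout would resample them here).
        h = np.tanh((x_t * m_x) @ W_x + (h * m_h) @ W_h)
        outputs.append(h)
    return np.stack(outputs)
```

Because the masks are fixed across timesteps, dropping a unit corresponds to sampling one realisation of the weights for the whole sequence, which is what makes the approximate-Bayesian-inference reading go through; resampling per step has no such interpretation and is the variant reported to fail on recurrent layers.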